Taken from the course summary on Weboodi:
See the THINK OPEN blog by Kimmo Vehkalahti at https://blogs.helsinki.fi/thinkopen/welcome-open-data-science/
The name of this course refers to THREE BIG THEMES: 1) Open Data, 2) Open Science, and 3) Data Science. These themes are summarized briefly as follows:
1) Open Data
There are more and more open data sets available. Utilizing and sharing data is an essential skill for researchers in all fields. During this course we use open data sets from different sources and learn to prepare them for different analyses. You will explore, analyse and interpret data from real world applications.
2) Open Science
Science strives to be open. Repeating or reproducing the results is a common aim in any branch of science, but it is not always easy or simple. Sharing data is not enough for reproducibility. What is also needed is the use of openly available software tools and methods, as well as sharing your code and results. You will learn these skills during this course, using state-of-the-art tools.
3) Data Science
Data Science is the name for the data-driven world of Statistics. Nowadays, finding or collecting data is not a problem. Instead, the challenges lie in extracting knowledge and discovering the patterns behind the data. This requires skills in coding, programming and modelling, as well as visualizing and analysing. You will face all these topics in this course.
We are quite excited about this course! So come along! Together we’ll guide you through these themes.
Welcome to the course! :)
# This is a so-called "R chunk" where you can write R code.
date()
## [1] "Fri Dec 4 00:25:57 2020"
This is the first time I am combining the use of the tools needed for this Introduction to Open Data Science (IODS) course.
Feeling very excited to embrace the future!
IODS project GitHub repository: https://github.com/mlammins/IODS-project
“By definition all scientists are data scientists. In my opinion, they are half hacker, half analyst, they use data to build products and find insights. It’s Columbus meets Columbo ― starry-eyed explorers and skeptical detectives.” ―Monica Rogati, Independent Data Science Advisor
Intro from the IODS MOOC-page:
How can we predict the values of one random variable based on information from other variables? This is the basic question behind a statistical model, where we try to reveal something about the causal relations between different matters of life on Earth - or in the Universe. Regression analysis has its roots in the early 19th century, and it is still going strong!
There are dozens of variations of the regression model, depending on the types of variables, the nature of the data, and the research design. We start from the simple linear regression and proceed to its extended form, the multiple linear regression. In addition, we check the validity of the assumptions that we make about the model(s), by investigating the so called model diagnostics.
The background of the data as described by the author:
Kimmo Vehkalahti: ASSIST 2014 - Phase 3 (end of Part 2), N=183 Course: Johdatus yhteiskuntatilastotieteeseen, syksy 2014 (Introduction to Social Statistics, fall 2014 - in Finnish), international survey of Approaches to Learning, made possible by Teachers’ Academy funding for KV in 2013-2015.
Data collected: 3.12.2014 - 10.1.2015 / KV. Data created: 14.1.2015 / KV; in English 9.4.2015 / KV, Florence, Italy. Imputation 4.4.2015: only missing information in certain backgrounds, minimal amount of missing values imputed using Phases 1 and 2.
For more information, see https://www.mv.helsinki.fi/home/kvehkala/JYTmooc/JYTOPKYS3-meta.txt
learning2014 <- read.csv("./data/learning2014.csv") # reading the analysis data
learning2014$gender <- as.factor(learning2014$gender) # change to factor
dim(learning2014) # number of rows and columns
## [1] 166 7
str(learning2014) # type of data
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
The learning dataset used in this exercise consists of 166 observations of 7 variables. The variables deep, stra and surf were calculated based on several Likert-scale questions (scored 1 to 5). The variables are gender, age, attitude, deep, stra, surf and points (see the structure output above).
To get the idea of the data, let’s make a graphical summary of the variables with females (red) and males (blue):
library(ggplot2)
library(GGally)
p <- ggpairs(learning2014, mapping = aes(col=gender, alpha=0.3), lower = list(combo = wrap("facethist", bins = 20)), upper = list(continuous = wrap("cor", family="sans")))
p # graphical summary
summary(learning2014) # numerical summary
## gender age attitude deep stra
## F:110 Min. :17.00 Min. :1.400 Min. :1.583 Min. :1.250
## M: 56 1st Qu.:21.00 1st Qu.:2.600 1st Qu.:3.333 1st Qu.:2.625
## Median :22.00 Median :3.200 Median :3.667 Median :3.188
## Mean :25.51 Mean :3.143 Mean :3.680 Mean :3.121
## 3rd Qu.:27.00 3rd Qu.:3.700 3rd Qu.:4.083 3rd Qu.:3.625
## Max. :55.00 Max. :5.000 Max. :4.917 Max. :5.000
## surf points
## Min. :1.583 Min. : 7.00
## 1st Qu.:2.417 1st Qu.:19.00
## Median :2.833 Median :23.00
## Mean :2.787 Mean :22.72
## 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :4.333 Max. :33.00
Taking a look at the graphical overview, the distributions of the female (red) and male (blue) values seem to be relatively similar in all categories. Females seem to have slightly higher values in surface learning (surf) and strategic learning (stra) and slightly lower values in attitude. The numerical summary of all the data (not split by gender) shows that the majority of participants are young (under 30 years of age).
Next, let’s test whether attitude, strategic learning (stra) and surface learning tendency (surf) are associated with the number of points obtained from the exam:
library(ggplot2)
# create a regression model with multiple explanatory variables
my_model <- lm(points ~ attitude + stra + surf, data = learning2014)
# print out a summary of the model
summary(my_model)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
The coefficients table gives the interpretation of the model. With the intercept giving the “baseline” for the exam points, attitude appears to have the strongest association with exam points: if attitude rises by one unit, exam points increase by about 3.4 units, given that all other variables stay the same. The effects of stra and surf are below one unit. The importance of attitude can also be seen in the last column, which gives the statistical significance of each coefficient. The three stars *** indicate high statistical significance, i.e. the coefficient differs from zero and the variable thus has a relationship with the target variable. In general, the p-value shown here tells the probability of observing an effect at least this large if the true coefficient were zero.
Let’s drop surf (the highest p-value) to see if it improves the fit:
# create a regression model with multiple explanatory variables
my_model2 <- lm(points ~ attitude + stra, data = learning2014)
# print out a summary of the model
summary(my_model2)
##
## Call:
## lm(formula = points ~ attitude + stra, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.6436 -3.3113 0.5575 3.7928 10.9295
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.9729 2.3959 3.745 0.00025 ***
## attitude 3.4658 0.5652 6.132 6.31e-09 ***
## stra 0.9137 0.5345 1.709 0.08927 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared: 0.2048, Adjusted R-squared: 0.1951
## F-statistic: 20.99 on 2 and 163 DF, p-value: 7.734e-09
Leaving out surface learning increases the significance of the remaining variables (smaller values in the p-value column). However, the p-value for strategic learning (stra) is still relatively high (debatable). As it is below 0.10, let’s keep it in the model. So we have found our final model.
Our final model is thus
\[\text{exam points} = 8.97 + 3.47 \times \text{attitude} + 0.91 \times \text{stra}\]
This means that a one-unit increase in attitude increases exam points by 3.47 points (stra being unchanged), and a one-unit increase in strategic learning (stra) increases exam points by 0.91. The baseline for exam points is 8.97 (the y-intercept). Note that this is only the systematic part of the model (no error term).
The numbers at the end of the multiple regression summary describe the overall fit of the model. The adjusted R-squared tells how well the model fits the data, i.e. the share of the variation in the dependent variable that the linear model explains (ranging roughly between 0 and 1). The R-squared seen here (roughly 0.20) is quite low, so there are probably some problematic patterns in the residual plots; at the very least, the residuals are not very close to the fitted values. That is why you cannot rely on the R-squared number alone; a visual inspection is a must!
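To make the R-squared figure concrete, it can also be computed by hand as one minus the ratio of the residual sum of squares to the total sum of squares. A minimal added sketch, using my_model2 and learning2014 from above; the result should match the multiple R-squared reported in the summary (about 0.20):
# R-squared by hand: 1 - RSS/TSS
rss <- sum(residuals(my_model2)^2)
tss <- sum((learning2014$points - mean(learning2014$points))^2)
1 - rss / tss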
# drawing diagnostic plots using the plot() function. Choose the plots 1, 2 and 5:
par(mfrow = c(2,2))
plot(my_model2, which=c(1,2,5))
A statistical model always includes assumptions which describe the data-generating process. In this linear regression case we assume that the relationship between the explanatory variables and the target is linear, and that the errors are normally distributed with mean zero and constant variance and are not correlated with each other. These assumptions can be checked by analyzing the residuals.
In the residuals vs. fitted plot we can see the source of the low R-squared value: the residuals are not very close to the zero line (note the y-axis scale). However, they show no noticeable pattern, so the linearity and constant-variance assumptions seem reasonable. The Q-Q plot shows that the residuals lie nicely on the line, i.e. the normality assumption holds. Residuals vs. leverage shows some points on the right-hand side of the plot, but the x-axis scale (maximum leverage of about 0.05) is still quite small, so no single observation has an outsized influence. All in all, it seems our multiple regression model describes the data reasonably well and the assumptions hold.
With the model validated, we could now use our regression model to predict the behavior of the target variable!
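For illustration, here is a minimal added sketch of such a prediction using predict() on the final model; the attitude and stra values below are hypothetical:
# predict exam points for a hypothetical student with attitude = 4 and stra = 3
new_student <- data.frame(attitude = 4, stra = 3)
predict(my_model2, newdata = new_student)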
Intro from the IODS MOOC-page:
One way to move on from linear regression is to consider settings where the dependent (target) variable is discrete. This opens a wide range of possibilities for modelling phenomena beyond the assumptions of continuity or normality.
Logistic regression is a powerful method that is well suited for predicting and classifying data by working with probabilities. It belongs to a large family of statistical models called Generalized Linear Models (GLM). An important special case that involves a binary target (taking only the values 0 or 1) is the most typical and popular form of logistic regression.
We will learn the concept of odds ratio (OR), which helps to understand and interpret the estimated coefficients of a logistic regression model. We also take a brief look at cross-validation, an important principle and technique for assessing the performance of a statistical model with another data set, for example by splitting the data into a training set and a testing set.
The dataset describes student achievement in secondary education in two Portuguese schools. It was joined from two datasets on performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). The data attributes include student grades and demographic, social and school-related features (notably alcohol consumption), and the data was collected using school reports and questionnaires.
More information: https://archive.ics.uci.edu/ml/datasets/Student+Performance
alc <- read.csv("./data/alc.csv") # reading the analysis data
dim(alc) # number of rows and columns
## [1] 382 35
str(alc) # type of data
## 'data.frame': 382 obs. of 35 variables:
## $ school : chr "GP" "GP" "GP" "GP" ...
## $ sex : chr "F" "F" "F" "F" ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : chr "U" "U" "U" "U" ...
## $ famsize : chr "GT3" "GT3" "LE3" "GT3" ...
## $ Pstatus : chr "A" "T" "T" "T" ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : chr "at_home" "at_home" "at_home" "health" ...
## $ Fjob : chr "teacher" "other" "other" "services" ...
## $ reason : chr "course" "course" "other" "home" ...
## $ nursery : chr "yes" "no" "yes" "yes" ...
## $ internet : chr "no" "yes" "yes" "yes" ...
## $ guardian : chr "mother" "father" "mother" "mother" ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : chr "yes" "no" "yes" "no" ...
## $ famsup : chr "no" "yes" "no" "yes" ...
## $ paid : chr "no" "no" "yes" "yes" ...
## $ activities: chr "no" "no" "no" "yes" ...
## $ higher : chr "yes" "yes" "yes" "yes" ...
## $ romantic : chr "no" "no" "no" "yes" ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
The final joined dataset has 382 observations of 35 variables and contains only unique individuals. The datasets were joined using the 13 student identifier variables: “school”, “sex”, “age”, “address”, “famsize”, “Pstatus”, “Medu”, “Fedu”, “Mjob”, “Fjob”, “reason”, “nursery” and “internet”. Only students present in both datasets were kept. The variables not used for joining the two datasets were combined by averaging (including the grade variables). More detailed information about the variables and their possible values is available at the UCI link above.
The grades G1, G2 and G3 are related to the course subject, Math or Portuguese. Two variables were added to the original datasets: alc_use, the average of weekday (Dalc) and weekend (Walc) alcohol consumption, and high_use, which is TRUE if alc_use is greater than 2 (a sketch of how these could be computed is shown below).
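For reference, a minimal sketch of how these two variables could have been computed in the data wrangling step (assuming the joined data contains the Dalc and Walc columns shown in the structure output above):
library(dplyr)
# alc_use: average of weekday (Dalc) and weekend (Walc) alcohol consumption
# high_use: TRUE if the average consumption is greater than 2
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2, high_use = alc_use > 2)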
The purpose of the analysis is to study the relationships between high/low alcohol consumption and some of the other variables in the data. Out of the many variables present, the parents’ cohabitation status, mother’s education, quality of family relationships and the number of school absences were chosen. Thus the study hypotheses are as follows:

- If the parents live apart (Pstatus=‘A’), the alcohol consumption (high_use) is higher
- The higher the mother’s education (Medu), the lower the alcohol consumption (high_use)
- The better the quality of family relationships (famrel), the lower the alcohol consumption (high_use)
- The more school absences (absences), the higher the alcohol consumption (high_use)

Note! The target variable high_use is a binary variable (TRUE=1, FALSE=0), so we must use logistic regression.
Before making logistic regression, let’s explore the distribution of chosen variables and their connection to the target variable numerically and graphically. First, let’s take a look at variable distributions:
library(dplyr)
# use dplyr to make a smaller dataset and include all chosen variables
alc_test <- select(alc, Pstatus, Medu, famrel, absences, alc_use, high_use)
# change Pstatus from char to factor
alc_test$Pstatus <- as.factor(alc_test$Pstatus)
# numerical summary of chosen variables
summary(alc_test[-c(5,6)])
## Pstatus Medu famrel absences
## A: 38 Min. :0.000 Min. :1.000 Min. : 0.0
## T:344 1st Qu.:2.000 1st Qu.:4.000 1st Qu.: 1.0
## Median :3.000 Median :4.000 Median : 3.0
## Mean :2.806 Mean :3.937 Mean : 4.5
## 3rd Qu.:4.000 3rd Qu.:5.000 3rd Qu.: 6.0
## Max. :4.000 Max. :5.000 Max. :45.0
# graphical exploration of variable distribution
par(mfrow = c(2,2))
#
barplot(table(alc_test$Pstatus), main="Distribution of Pstatus")
barplot(table(alc_test$Medu), main="Distribution of Medu")
barplot(table(alc_test$famrel), main="Distribution of famrel")
barplot(table(alc_test$absences), main="Distribution of absences")
From the data we can see that the vast majority of participants live together with their parents (T). Also, most of the participants have good family relations (famrel of 4 or more) and the number of absences is relatively small (75 % have at most 6). Mother’s education is almost evenly distributed among the variable values, not counting zero.
Out of curiosity, let’s see how the alcohol consumption is distributed.
summary(alc_test[c(5,6)])
## alc_use high_use
## Min. :1.000 Mode :logical
## 1st Qu.:1.000 FALSE:268
## Median :1.500 TRUE :114
## Mean :1.889
## 3rd Qu.:2.500
## Max. :5.000
par(mfrow = c(1,2))
barplot(table(alc_test$alc_use), main="Distribution of alc_use")
barplot(table(alc_test$high_use), main="Distribution of high_use")
It seems that only about one third of the participants use alcohol in high volumes. To better grasp the situation, here also the alcohol use (alc_use, numeric from 1 to 5) is shown instead of only the binary variable high use (high_use, TRUE if alc_use > 2).
Now let’s see what is the relation of our chosen variables to alcohol consumption. Here also the numerical alc_use is used.
par(mfrow = c(2,2))
boxplot(alc_use ~ Pstatus, data = alc)
boxplot(alc_use ~ Medu, data=alc)
boxplot(alc_use ~ famrel, data=alc)
boxplot(alc_use ~ absences, data=alc)
These boxplots give a rough idea of the relationships between the chosen variables and alcohol consumption.
Time to form a mathematically rigorous model using logistic regression! Note that Pstatus is treated here as a factor. To summarize:

- Target variable: high_use
- Chosen explanatory variables: Pstatus, Medu, famrel, absences
m <- glm(high_use ~ Pstatus+Medu+famrel+absences, data = alc_test, family ="binomial")
# print out a summary of the model
summary(m)
##
## Call:
## glm(formula = high_use ~ Pstatus + Medu + famrel + absences,
## family = "binomial", data = alc_test)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2741 -0.8107 -0.7076 1.1985 1.8620
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.453190 0.697544 -0.650 0.515890
## PstatusT 0.168029 0.397078 0.423 0.672176
## Medu -0.008959 0.107430 -0.083 0.933542
## famrel -0.243649 0.124088 -1.964 0.049585 *
## absences 0.088049 0.022951 3.836 0.000125 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 443.46 on 377 degrees of freedom
## AIC: 453.46
##
## Number of Fisher Scoring iterations: 4
The most important insight from the model summary is the coefficients section. Using the estimated coefficients, the obtained model is

\(\text{logit}(P(\text{high\_use} = 1)) = -0.45 + 0.17 \times \text{PstatusT} - 0.01 \times \text{Medu} - 0.24 \times \text{famrel} + 0.09 \times \text{absences}\)

Since the target variable high_use is binary, it only takes the values 0 (FALSE, “failure”) and 1 (TRUE, “success”) in the modelling sense. The coefficients of the fitted model can thus be interpreted as changes in the log-odds of high alcohol consumption for a one-unit increase in the corresponding explanatory variable.
Of the variables, only famrel and absences were found to be statistically significant, at the 0.05 and 0.001 levels, respectively.
The model coefficients can also be interpreted as odds ratios.
Odds: the ratio of expected “successes” to “failures”, i.e. \(\frac{p}{1-p}\) with value ranging from 0 to infinity.
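In the logistic model the coefficients act on the log-odds scale, so exponentiating a coefficient gives the corresponding odds ratio; this standard identity is added here for reference:

\[\log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k, \qquad \text{OR}_j = e^{\beta_j}\]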
So higher odds correspond to a higher probability of success. They are an alternative way of expressing probabilities. Let’s calculate the odds ratios and confidence intervals for the coefficients.
# print out the coefficients of the model
coef(m)
## (Intercept) PstatusT Medu famrel absences
## -0.453190280 0.168028965 -0.008958589 -0.243649169 0.088049068
# compute odds ratios (OR)
OR <- coef(m) %>% exp
# compute confidence intervals (CI)
CI <- confint(m) %>% exp
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.6355972 0.1580208 2.462028
## PstatusT 1.1829709 0.5566928 2.673024
## Medu 0.9910814 0.8032307 1.224977
## famrel 0.7837626 0.6137127 1.000047
## absences 1.0920417 1.0461954 1.144793
Odds ratios can be used to quantify the relationship between an explanatory variable and the target variable. An odds ratio higher than 1 means that the variable is positively associated with “success”. The odds ratios of our variables can thus be interpreted as multiplicative changes in the odds of high alcohol consumption for a one-unit increase in the corresponding variable.
Comparing the results with our hypotheses: the estimated effects of famrel (OR below 1) and absences (OR above 1) point in the hypothesized directions, whereas the confidence intervals of Pstatus and Medu clearly include 1, so the hypotheses concerning them are not supported by the model.
Let’s improve our model by discarding the least significant variables, Pstatus and Medu. The final model then has only the variables famrel and absences.
# fix the model
m2 <- glm(high_use ~ famrel+absences, data = alc_test, family ="binomial")
# print out a summary of the model
summary(m2)
##
## Call:
## glm(formula = high_use ~ famrel + absences, family = "binomial",
## data = alc_test)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.3139 -0.8028 -0.7125 1.2100 1.8605
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.33027 0.50783 -0.650 0.515461
## famrel -0.24109 0.12365 -1.950 0.051211 .
## absences 0.08668 0.02270 3.819 0.000134 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 443.66 on 379 degrees of freedom
## AIC: 449.66
##
## Number of Fisher Scoring iterations: 4
It seems that the significance of famrel and absences has increased slightly compared to the original model. The Pr(>|z|) values are almost the same, but we now have a simpler model than before, so this is an improvement.
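As an additional check (not part of the original comparison), the two models can also be compared with their AIC values, already reported in the summaries above; a smaller AIC indicates a better balance between fit and model complexity:
# compare the information criteria of the two models (smaller is better)
AIC(m, m2)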
Let’s use the improved model to predict the high_use values of individuals.
# predict() the probability of high_use
probabilities <- predict(m2, type = "response")
# add the predicted probabilities to 'alc_test'
alc_test <- mutate(alc_test, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc_test <- mutate(alc_test, prediction = probabilities>0.5)
# see the last ten original classes, predicted probabilities, and class predictions
select(alc_test, famrel,absences, high_use, probability, prediction) %>% tail(10)
## famrel absences high_use probability prediction
## 373 4 0 FALSE 0.2150733 FALSE
## 374 5 7 TRUE 0.2831342 FALSE
## 375 5 1 FALSE 0.1901522 FALSE
## 376 4 6 FALSE 0.3154941 FALSE
## 377 5 2 FALSE 0.2038593 FALSE
## 378 4 2 FALSE 0.2457776 FALSE
## 379 2 2 FALSE 0.3454525 FALSE
## 380 1 3 FALSE 0.4227907 FALSE
## 381 2 4 TRUE 0.3856256 FALSE
## 382 4 2 TRUE 0.2457776 FALSE
# tabulate the target variable versus the predictions
select(alc_test, high_use, prediction) %>% table()
## prediction
## high_use FALSE TRUE
## FALSE 259 9
## TRUE 101 13
library(ggplot2)
# initialize a plot of 'high_use' versus 'probability' in 'alc_test'
g <- ggplot(alc_test, aes(x =probability, y = high_use, col=prediction))
# define the geom as points and draw the plot
g+geom_point()
# tabulate the target variable versus the predictions
table(high_use = alc_test$high_use, prediction = alc_test$prediction) %>% prop.table() %>% addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.67801047 0.02356021 0.70157068
## TRUE 0.26439791 0.03403141 0.29842932
## Sum 0.94240838 0.05759162 1.00000000
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc_test$high_use, prob = alc_test$probability)
## [1] 0.2879581
To interpret the result, the model predicts wrongly about 29 % of the time. There is much room for improvement, although this is clearly better than random guessing (a 50-50 coin flip would be wrong about half the time). A more demanding baseline is the trivial strategy of always predicting FALSE, sketched below.
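To put the 29 % error into perspective, here is a small added sketch that computes the error of the trivial strategy of always predicting FALSE (i.e. probability 0 for every student); it equals the share of high users in the data, about 30 %:
# error of always predicting FALSE: equals the proportion of TRUE values in high_use
loss_func(class = alc_test$high_use, prob = 0)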
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc_test, cost = loss_func, glmfit = m2, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2931937
The prediction error obtained here is slightly larger than that of the model introduced in the chapter’s DataCamp session (prediction error of 0.26). Choosing more strongly associated explanatory variables might improve the predictions.
Intro from the IODS MOOC-page:
The topics of this chapter - clustering and classification - are handy and visual tools of exploring statistical data. Clustering means that some points (or observations) of the data are in some sense closer to each other than some other points. In other words, the data points do not comprise a homogeneous sample, but instead, it is somehow clustered.
In general, the clustering methods try to find these clusters (or groups) from the data. One of the most typical clustering methods is called k-means clustering. Also hierarchical clustering methods are quite popular, giving tree-like dendrograms as their main output.
As such, clusters are easy to find, but what might be the “right” number of clusters? It is not always clear. And how to give these clusters names and interpretations?
Based on a successful clustering, we may try to classify new observations to these clusters and hence validate the results of clustering. Another way is to use various forms of discriminant analysis, which operates with the (now) known clusters, asking: “what makes the difference(s) between these groups (clusters)?”
In the connection of these methods, we also discuss the topic of distance (or dissimilarity or similarity) measures. There are lots of other measures than just the ordinary Euclidean distance, although it is one of the most important ones. Several discrete and even binary measures exist and are widely used for different purposes in various disciplines.
This chapter’s dataset consists of housing values in suburbs of Boston (the Boston data from the MASS package).
# access the MASS package
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
The dataset contains 506 observations of 14 variables; descriptions of the variables can be found in the MASS package documentation (?Boston).
The function pairs gives a rough visual idea of the data while summary describes the variables numerically.
library(dplyr)
library(corrplot)
pairs(Boston) # done as in Data Camp, but it is so small, almost impossible to see anything
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
Based on the numerical summary, the variables crim, zn, indus, dis and rad seem to have mostly small values, while rm, age and black have larger values. The output of pairs is very difficult to read, so let’s calculate the correlation matrix to see the relationships between the variables. The (rather large) correlation matrix is easier to interpret when visualized with the corrplot function.
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston)
# print the correlation matrix, kable for nicer looking table
knitr::kable(
cor_matrix %>% round(digits=2)
)
| crim | zn | indus | chas | nox | rm | age | dis | rad | tax | ptratio | black | lstat | medv | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| crim | 1.00 | -0.20 | 0.41 | -0.06 | 0.42 | -0.22 | 0.35 | -0.38 | 0.63 | 0.58 | 0.29 | -0.39 | 0.46 | -0.39 |
| zn | -0.20 | 1.00 | -0.53 | -0.04 | -0.52 | 0.31 | -0.57 | 0.66 | -0.31 | -0.31 | -0.39 | 0.18 | -0.41 | 0.36 |
| indus | 0.41 | -0.53 | 1.00 | 0.06 | 0.76 | -0.39 | 0.64 | -0.71 | 0.60 | 0.72 | 0.38 | -0.36 | 0.60 | -0.48 |
| chas | -0.06 | -0.04 | 0.06 | 1.00 | 0.09 | 0.09 | 0.09 | -0.10 | -0.01 | -0.04 | -0.12 | 0.05 | -0.05 | 0.18 |
| nox | 0.42 | -0.52 | 0.76 | 0.09 | 1.00 | -0.30 | 0.73 | -0.77 | 0.61 | 0.67 | 0.19 | -0.38 | 0.59 | -0.43 |
| rm | -0.22 | 0.31 | -0.39 | 0.09 | -0.30 | 1.00 | -0.24 | 0.21 | -0.21 | -0.29 | -0.36 | 0.13 | -0.61 | 0.70 |
| age | 0.35 | -0.57 | 0.64 | 0.09 | 0.73 | -0.24 | 1.00 | -0.75 | 0.46 | 0.51 | 0.26 | -0.27 | 0.60 | -0.38 |
| dis | -0.38 | 0.66 | -0.71 | -0.10 | -0.77 | 0.21 | -0.75 | 1.00 | -0.49 | -0.53 | -0.23 | 0.29 | -0.50 | 0.25 |
| rad | 0.63 | -0.31 | 0.60 | -0.01 | 0.61 | -0.21 | 0.46 | -0.49 | 1.00 | 0.91 | 0.46 | -0.44 | 0.49 | -0.38 |
| tax | 0.58 | -0.31 | 0.72 | -0.04 | 0.67 | -0.29 | 0.51 | -0.53 | 0.91 | 1.00 | 0.46 | -0.44 | 0.54 | -0.47 |
| ptratio | 0.29 | -0.39 | 0.38 | -0.12 | 0.19 | -0.36 | 0.26 | -0.23 | 0.46 | 0.46 | 1.00 | -0.18 | 0.37 | -0.51 |
| black | -0.39 | 0.18 | -0.36 | 0.05 | -0.38 | 0.13 | -0.27 | 0.29 | -0.44 | -0.44 | -0.18 | 1.00 | -0.37 | 0.33 |
| lstat | 0.46 | -0.41 | 0.60 | -0.05 | 0.59 | -0.61 | 0.60 | -0.50 | 0.49 | 0.54 | 0.37 | -0.37 | 1.00 | -0.74 |
| medv | -0.39 | 0.36 | -0.48 | 0.18 | -0.43 | 0.70 | -0.38 | 0.25 | -0.38 | -0.47 | -0.51 | 0.33 | -0.74 | 1.00 |
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex=0.6)
From the correlation matrix we can see that there are strong

- negative correlations between the distance to employment centres and the proportion of units built before 1940, the nitrogen oxide concentration and the proportion of non-retail business acres (dis & age, dis & nox, dis & indus), as well as between the median value of homes and the lower status of the population (medv & lstat)
- positive correlations between the property tax rate and accessibility to radial highways (tax & rad), among others.
Standardization of the data is useful when the variables have large differences in their ranges or are measured in different units. Let’s scale the Boston data by subtracting the column means from the corresponding columns and dividing the differences by the standard deviations: \[scaled(x) = \frac{x-mean(x)}{sd(x)}.\] This is one of the most popular ways of standardizing data, the z-score. After scaling, all variables have a mean of zero and a standard deviation of one, so they are on the same scale.
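As a sanity check, the same z-score transformation can be done by hand for a single column; a minimal added sketch for crim, whose result should match the corresponding column of the scaled data below:
# z-score of the crim column by hand: subtract the mean, divide by the standard deviation
crim_z <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
head(crim_z)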
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
Next we shall create a categorical variable crime (per capita crime rate by town) from the standardized data set. Let’s cut the variable by quantiles to get the high, low and middle rates of crime into their own categories. Finally, let’s drop the old crime rate variable from the data set.
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label=c("low","med_low","med_high","high"))
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
In order to test the predictive power of a statistical method, let’s divide the scaled Boston data set randomly into a training set (80 %) and a test set (20 %).
# number of rows in the Boston dataset
n <- nrow(Boston)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
The standardization was done to satisfy the assumptions of linear discriminant analysis (LDA):

- the variables are normally distributed (conditional on the classes)
- each variable has the same variance.
The general idea of LDA is to reduce the dimensionality by transforming the features from a higher-dimensional space to a lower-dimensional one while preserving the information that separates the classes, removing redundant and dependent features in the process.
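One common way to express this idea, added here for reference, is Fisher’s criterion: LDA looks for projection directions \(w\) that maximize the between-class variance relative to the within-class variance,

\[\max_{w} \frac{w^{T} B\, w}{w^{T} W\, w},\]

where \(B\) is the between-class and \(W\) the within-class covariance matrix.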
Now, let’s fit LDA on the train set with the newly-created crime as the target variable and all other variables as predictor variables. The result can be visualised by the LDA (bi)plot.
# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2698020 0.2400990 0.2475248 0.2425743
##
## Group means:
## zn indus chas nox rm age
## low 0.9684189 -0.8947956 -0.16396851 -0.8784937 0.413537125 -0.8650159
## med_low -0.1026721 -0.3620347 -0.06938576 -0.5822458 -0.095277587 -0.3525679
## med_high -0.3929105 0.1848333 0.23949396 0.4005512 0.003124906 0.4620471
## high -0.4872402 1.0171960 -0.19198008 1.0449869 -0.402303021 0.8151124
## dis rad tax ptratio black lstat
## low 0.8857103 -0.6942276 -0.7447710 -0.41933026 0.38369176 -0.74887216
## med_low 0.3940016 -0.5402442 -0.5228745 -0.07850848 0.33562134 -0.16704198
## med_high -0.4302072 -0.4340524 -0.3311268 -0.28524210 0.07173206 0.07402776
## high -0.8603132 1.6373367 1.5134896 0.77985517 -0.93253844 0.88490789
## medv
## low 0.48501916
## med_low 0.04810769
## med_high 0.05047172
## high -0.72839589
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.09093921 0.598110952 -1.14012501
## indus 0.01845816 -0.171356609 0.18491885
## chas -0.15206953 -0.188803313 -0.01998240
## nox 0.38110090 -0.705886942 -1.15050102
## rm -0.13337127 -0.035256090 -0.14535542
## age 0.19688551 -0.338087327 -0.17555977
## dis -0.10598375 -0.138021476 0.43648847
## rad 3.33232757 1.084974149 -0.03876362
## tax 0.08466762 -0.175005320 0.58183317
## ptratio 0.08003447 0.006976898 -0.26787488
## black -0.13253991 0.005844627 0.11646955
## lstat 0.23482197 -0.181597547 0.45983466
## medv 0.15819788 -0.299709673 0.01961873
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9532 0.0360 0.0108
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col=classes, pch=classes)
lda.arrows(lda.fit, myscale = 1)
Here we can see the results of the LDA. Each color represents a class of the target variable. The predictor variables are the arrows in the middle of the picture, the length and the direction of the arrow depicting the effect of the predictor. It seems that here the variables rad, zn and nox discriminate/separate the classes the best.
Now we use the fitted LDA model to predict the classes of the test data. LDA calculates, for each new observation, the probability of belonging to each of the classes, and the observation is then classified to the class with the highest probability.
First, let’s save the correct classes and then remove the crime variable.
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 11 7 0 0
## med_low 5 15 9 0
## med_high 0 8 16 2
## high 0 0 0 29
The prediction would have been perfect if all the values were on the diagonal. Certainly this is not the case but the largest values are on the diagonal. There seems to be some mixing with the first three classes but the last class (high) is most correctly predicted. This was to be expected based on the training set figure.
Different distance measures (e.g. Euclidean or Manhattan) are used to quantify how similar or dissimilar observations are to each other. Similar observations form clusters, which can be found with different methods (e.g. k-means).
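For reference, for two observations \(x\) and \(y\) with \(p\) variables these two distances are defined as

\[d_{E}(x, y) = \sqrt{\sum_{i=1}^{p}(x_i - y_i)^2}, \qquad d_{M}(x, y) = \sum_{i=1}^{p}|x_i - y_i|.\]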
Let’s find clusters on the Boston dataset using k-means. First, let’s reload the dataset and standardize it to get comparable distances (Euclidean and Manhattan). Then let’s run the k-means algorithm on the dataset.
# reload Boston from MASS
library(MASS)
library(ggplot2)
data('Boston')
# center and standardize variables
boston_scaled <- scale(Boston)
# euclidean distance matrix
dist_eu <- dist(boston_scaled)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(boston_scaled,method="manhattan")
# look at the summary of the distances
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
One can see that the Manhattan distance gives much larger values than the Euclidean distance. For now, however, let’s use the Euclidean distance.
# k-means clustering
km1 <-kmeans(boston_scaled, centers = 1)
km2 <-kmeans(boston_scaled, centers = 2)
km3 <-kmeans(boston_scaled, centers = 3)
km4 <-kmeans(boston_scaled, centers = 4)
# plot the Boston dataset with clusters
pairs(boston_scaled, col = km1$cluster) # 1 cluster
pairs(boston_scaled, col = km2$cluster) # 2 clusters
pairs(boston_scaled, col = km3$cluster) # 3 clusters
pairs(boston_scaled, col = km4$cluster) # 4 clusters
# too general view, make smaller
pairs(boston_scaled[,6:10], col = km2$cluster) # 2 clusters
pairs(boston_scaled[,6:10], col = km3$cluster) # 3 clusters
pairs(boston_scaled[,6:10], col = km4$cluster) # 4 clusters
Different numbers of centers (1, 2, 3 or 4) were used for k-means clustering. One cluster seemed to be too few, since clear groups were still visible in the plots, while four clusters did not bring much additional structure compared to two or three (the centroids and clusters hardly changed). Thus the optimal number seems to be 2 or 3 clusters.
set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')+scale_x_continuous(breaks = 1:10,labels=1:10)
The total within-cluster sum of squares (TWCSS) suggests 2 as the optimal number of clusters, since that is where the TWCSS drops most sharply (from 1 to 2 clusters).
# Run the code below for the (scaled) train data that you used to fit the LDA.
#The code creates a matrix product, which is a projection of the data points.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
# Next, install and access the plotly package. Create a 3D plot (**Cool!**)
# of the columns of the matrix product by typing the code below.
library(plotly)
# Note! To install plotly in Linux, remember to install libcurl from terminal.
# * deb: libcurl4-openssl-dev (Debian, Ubuntu, etc)
# * rpm: libcurl-devel (Fedora, CentOS, RHEL)
# * csw: libcurl_dev (Solaris)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color=train$crime)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
Intro from the IODS MOOC-page:
Actually, a fairly large selection of statistical methods can be listed under the title “dimensionality reduction techniques”. Most often (nearly always, that is!) the real-world phenomena are multidimensional: they may consist of not just two or three but 5 or 10 or 20 or 50 (or more) dimensions. Of course, we are living only in a three-dimensional (3D) world, so those multiple dimensions may really challenge our imagination. It would be easier to reduce the number of dimensions in one way or another.
We shall now learn the basics of two data science based ways of reducing the dimensions. The principal method here is principal component analysis (PCA), which reduces any number of measured (continuous) and correlated variables into a few uncorrelated components that collect together as much variance as possible from the original variables. The most important components can be then used for various purposes, e.g., drawing scatterplots and other fancy graphs that would be quite impossible to achieve with the original variables and too many dimensions.
Multiple correspondence analysis (MCA) and other variations of CA bring us similar possibilities in the world of discrete variables, even nominal scale (classified) variables, by finding a suitable transformation into continuous scales and then reducing the dimensions quite analogously with the PCA. The typical graphs show the original classes of the discrete variables on the same “map”, making it possible to reveal connections (correspondences) between different things that would be quite impossible to see from the corresponding cross tables (too many numbers!).
Briefly stated, these methods help to visualize and understand multidimensional phenomena by reducing their dimensionality that may first feel impossible to handle at all.
This chapter’s dataset originates from the United Nations Development Programme. The Human Development Index (HDI) was created to assess the development of a country by other criteria than economic growth alone. More information can be found on the UNDP Human Development Data page and in the technical notes on calculating the human development indices.
# read the human data, row names as first column
human <- read.csv("./data/human.csv", row.names=1)
str(human)
## 'data.frame': 155 obs. of 8 variables:
## $ Edu2.FM : num 1.007 0.997 0.983 0.989 0.969 ...
## $ Labo.FM : num 0.891 0.819 0.825 0.884 0.829 ...
## $ Life.Exp : num 81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
## $ Edu.Exp : num 17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
## $ GNI : int 64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
## $ Mat.Mor : int 4 6 6 5 6 7 9 28 11 8 ...
## $ Ado.Birth: num 7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
## $ Parli.F : num 39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
The dataset consists of 155 observations (i.e. countries) of 8 variables. The data combines several indicators for each country:

- GNI, Life.Exp, Edu.Exp, Mat.Mor, Ado.Birth
- Parli.F, Edu2.FM, Labo.FM

Most of the variable names have been shortened from the original data and two new variables (Edu2.FM and Labo.FM) were computed.
library(ggplot2) # for graphics
library(GGally)
library(corrplot)
library(dplyr)
summary(human)
## Edu2.FM Labo.FM Life.Exp Edu.Exp
## Min. :0.1717 Min. :0.1857 Min. :49.00 Min. : 5.40
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:66.30 1st Qu.:11.25
## Median :0.9375 Median :0.7535 Median :74.20 Median :13.50
## Mean :0.8529 Mean :0.7074 Mean :71.65 Mean :13.18
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:77.25 3rd Qu.:15.20
## Max. :1.4967 Max. :1.0380 Max. :83.50 Max. :20.20
## GNI Mat.Mor Ado.Birth Parli.F
## Min. : 581 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 12040 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 17628 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :123124 Max. :1100.0 Max. :204.80 Max. :57.50
ggpairs(human, upper = list(continuous = wrap("cor", family="sans"))) # graphical overview
There is a lot of skewness present in the variables, i.e. only Edu2.FM and Edu.Exp are somewhat normally distributed. Many variables are strongly correlated, as implied by the statistically significant correlation coefficients. The correlations can be visualized more clearly with a correlation plot:
# compute the correlation matrix and visualize it with corrplot
cor(human) %>% corrplot(type="lower") # lower correlation matrix, since symmetric
The correlation matrix gives us an idea of the relationships between the variables, i.e. there seem to be strong

- negative correlations between the maternal mortality ratio (Mat.Mor) and life expectancy (Life.Exp), expected years of schooling (Edu.Exp) and the female/male ratio of secondary education (Edu2.FM); the same holds for the adolescent birth rate (Ado.Birth)
- positive correlations between expected years of schooling and life expectancy (Edu.Exp & Life.Exp), and between the adolescent birth rate and maternal mortality (Ado.Birth & Mat.Mor).

The percentage of female representatives in parliament (Parli.F) and the ratio of female to male labour force participation (Labo.FM) do not seem to be strongly correlated with the other variables; there is, however, a slight correlation between the two.
Principal component analysis (PCA) is a statistical procedure which reduces the number of dimensions in multivariate data. Reducing the dimensionality is essential for representing the phenomenon of interest clearly, without “too much” distracting information (unrelated data, noise or random error).

The idea of PCA is to transform the data to a new space with an equal or smaller number of dimensions (new features). These new features are called principal components (PCs). The first PC captures the maximum amount of variance from the features in the original data, the second PC captures the maximum amount of the remaining variability, and so on. All PCs are uncorrelated and orthogonal to each other.
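In mathematical terms (added here for reference), the first principal component direction \(w_1\) of a centered data matrix \(X\) maximizes the variance of the projected data,

\[w_{1} = \arg\max_{\lVert w \rVert = 1} \operatorname{Var}(X w),\]

and each subsequent component does the same subject to being orthogonal to the previous ones.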
Since PCA is sensitive to variance, let us make two analyses: one without and one with a standardized dataset.
# PCA on non-standardized human data (with the SVD method)
pca_human <- prcomp(human)
# printing summary of PCA
s <- summary(pca_human)
s
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## Standard deviation 1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912 0.1591
## Proportion of Variance 9.999e-01 0.0001 0.00 0.00 0.000 0.000 0.0000 0.0000
## Cumulative Proportion 9.999e-01 1.0000 1.00 1.00 1.000 1.000 1.0000 1.0000
# print out rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2, ], digits = 1)
pca_pr
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## 100 0 0 0 0 0 0 0
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
# draw a biplot of the PC representation and the original variables
biplot(pca_human, choices = 1:2, cex=c(0.8,1), col=c("grey40","deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2], sub = "PC1 is the strongest")
# standardize the variables by the function scale (the z-score)
human_std <- scale(human)
# print out summaries of the standardized variables (mean = 0)
summary(human_std)
## Edu2.FM Labo.FM Life.Exp Edu.Exp
## Min. :-2.8189 Min. :-2.6247 Min. :-2.7188 Min. :-2.7378
## 1st Qu.:-0.5233 1st Qu.:-0.5484 1st Qu.:-0.6425 1st Qu.:-0.6782
## Median : 0.3503 Median : 0.2316 Median : 0.3056 Median : 0.1140
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5958 3rd Qu.: 0.7350 3rd Qu.: 0.6717 3rd Qu.: 0.7126
## Max. : 2.6646 Max. : 1.6632 Max. : 1.4218 Max. : 2.4730
## GNI Mat.Mor Ado.Birth Parli.F
## Min. :-0.9193 Min. :-0.6992 Min. :-1.1325 Min. :-1.8203
## 1st Qu.:-0.7243 1st Qu.:-0.6496 1st Qu.:-0.8394 1st Qu.:-0.7409
## Median :-0.3013 Median :-0.4726 Median :-0.3298 Median :-0.1403
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.3712 3rd Qu.: 0.1932 3rd Qu.: 0.6030 3rd Qu.: 0.6127
## Max. : 5.6890 Max. : 4.4899 Max. : 3.8344 Max. : 3.1850
# perform principal component analysis (with the SVD method)
pca2_human <- prcomp(human_std)
# printing summary of PCA
s2 <- summary(pca2_human)
s2
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 2.0708 1.1397 0.87505 0.77886 0.66196 0.53631 0.45900
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595 0.02634
## Cumulative Proportion 0.5361 0.6984 0.79413 0.86996 0.92473 0.96069 0.98702
## PC8
## Standard deviation 0.32224
## Proportion of Variance 0.01298
## Cumulative Proportion 1.00000
# print out rounded percentages of variance captured by each PC
pca2_pr <- round(100*s2$importance[2, ], digits = 1)
pca2_pr
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## 53.6 16.2 9.6 7.6 5.5 3.6 2.6 1.3
# create object pc_lab to be used as axis labels
pc_lab2 <- paste0(names(pca2_pr), " (", pca2_pr, "%)")
# draw a biplot of the PC representation and the original variables
biplot(pca2_human, choices = 1:2, cex=c(0.8,1), col=c("grey40","deeppink2"), xlab = pc_lab2[1], ylab = pc_lab2[2], sub = "After standardization of variables")
The results are very different depending on whether or not the data was standardized.

Without standardization, there is practically only one PC, which explains almost 100% of the variance in the data. The variable gross national income per capita (GNI) dominates the PCA because its absolute values are orders of magnitude larger than those of the other variables. Thus, as expected, standardization is crucial!

After standardization, the effect of the variables other than GNI becomes visible. The first PC explains 53.6% of the variability and the second 16.2%, together covering roughly 70% of the variability in the data. Of the variables, Parli.F and Labo.FM have a high positive correlation with PC2 (the angle between their arrows and the y-axis is small), while all the other variables correlate strongly with PC1 (the x-axis). This kind of division could already be observed in the earlier correlation plot.

Regarding the interpretation of the dimensions in the PCA of the standardized data, PC1 seems to be a general index of health and knowledge (“standard of living”), while PC2 seems to reflect female empowerment in society.
While PCA reduces the dimensionality of continuous variables, Multiple Correspondence Analysis (MCA) is an analogous method for categorical variables.
The tea dataset (from the package FactoMineR) consists of tea consumption survey results from 300 tea consumers. For our purposes we will choose only six columns (the same as in the course DataCamp exercises).
library(tidyr)
library(FactoMineR)
data("tea") # Load data tea from FactoMineR package
# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
# select the 'keep_columns' to create a new dataset
tea_time <- dplyr::select(tea, one_of(keep_columns))
# look at the structure of the data
str(tea_time)
## 'data.frame': 300 obs. of 6 variables:
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
Exploring our data reveals that we have 300 observations in 6 variables. All the variables are of type factor with several levels.
# numerical summary
summary(tea_time)
## Tea How how sugar
## black : 74 alone:195 tea bag :170 No.sugar:155
## Earl Grey:193 lemon: 33 tea bag+unpackaged: 94 sugar :145
## green : 33 milk : 63 unpackaged : 36
## other: 9
## where lunch
## chain store :192 lunch : 44
## chain store+tea shop: 78 Not.lunch:256
## tea shop : 30
##
# visualize the dataset
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped
All of the variables have a dominating category except for sugar, which is evenly divided between sugar and no sugar.
Now let’s perform an MCA on our tea dataset!
# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)
# summary of the model
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6 Dim.7
## Variance 0.279 0.261 0.219 0.189 0.177 0.156 0.144
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519 7.841
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953 77.794
## Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.141 0.117 0.087 0.062
## % of var. 7.705 6.392 4.724 3.385
## Cumulative % of var. 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr cos2
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139 0.003
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626 0.027
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111 0.107
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841 0.127
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979 0.035
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990 0.020
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347 0.102
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459 0.161
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968 0.478
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898 0.141
## v.test Dim.3 ctr cos2 v.test
## black 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 2.867 | 0.433 9.160 0.338 10.053 |
## green -5.669 | -0.108 0.098 0.001 -0.659 |
## alone -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 3.226 | 1.329 14.771 0.218 8.081 |
## milk 2.422 | 0.013 0.003 0.000 0.116 |
## other 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
The output of the MCA summary is somewhat lengthy, but it contains the following information (the same components can also be accessed directly from the model object, as sketched after this list):
Eigenvalues: the variances and the percentages of variance retained by each dimension
Individuals: the individuals’ coordinates, their contributions (%) to the dimensions and the cos2 values (squared correlations) on the dimensions
Categories: the coordinates of the variable categories, their contributions (%), the cos2 values (squared correlations) and the v.test values. The v.test follows a standard normal distribution: if its absolute value exceeds 1.96, the coordinate differs significantly from zero.
Categorical variables (eta2): the squared correlation between each variable and each dimension. A value close to 1 indicates a strong link between the variable and the dimension.
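A small sketch of accessing those components directly (component names as documented for FactoMineR::MCA):
# eigenvalues, percentages of variance and cumulative percentages
mca$eig
# coordinates and contributions of the individuals
head(mca$ind$coord)
head(mca$ind$contrib)
# coordinates, contributions and v.tests of the variable categories
head(mca$var$coord)
head(mca$var$contrib)
head(mca$var$v.test)
# squared correlations (eta2) between the variables and the dimensions
mca$var$eta2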
Here it seems there are no dominating dimensions (the first and second dimensions explain only about 15% and 14% of the variation, respectively).
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali")
Visualizing the data does not give much more insight either. Judging by the categories with the largest spread along the dimensions, the first dimension seems to relate to how the tea is packaged (tea bag vs. unpackaged), while the second dimension relates to where the tea is bought (tea shop vs. chain store). Looking at the clusters, unpackaged and tea shop lie close to each other, as do tea bag and chain store. Many of the other connections remain fuzzy, however. Sometimes, despite all the advanced methods, extracting insights from the data is simply difficult.
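If the default category map feels crowded, a ggplot2-based alternative could be drawn with the factoextra package (an assumption here; the package is not used elsewhere in this report):
# a sketch, assuming factoextra is installed
library(factoextra)
fviz_mca_var(mca, repel = TRUE)   # variable category map with non-overlapping labels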
Intro from the IODS MOOC-page:
After working hard with multivariate, mostly explorative, even heuristic techniques that are common in data science, the last topic of the course will take us back to the task of building statistical models.
The new challenge is that the data will include two types of dependencies simultaneously: In addition to the correlated variables that we have faced with all models and methods so far, the observations of the data will also be correlated with each other.
Usually, we can assume that the observations are not correlated - instead, they are assumed to be independent of each other. However, in longitudinal settings this assumption seldom holds, because we have multiple observations or measurements of the same individuals. The concept of repeated measures highlights this phenomenon that is actually quite typical in many applications. Both types of dependencies (variables and observations) must be taken into account; otherwise the models will be biased.
To analyze this kind of data sets, we will focus on a single class of methods, called linear mixed effects models that can cope quite nicely with the setting described above.
Before we consider two examples of mixed models, namely the random intercept model and the random intercept and slope model, we will learn how to wrangle longitudinal data in wide form and long form, take a look at some graphical displays of longitudinal data, and try a simple summary measure approach that may sometimes provide a useful first step in these challenges. In passing, we “violently” apply the usual “fixed” models (although we know that they are not the right choice here) in order to compare the results and see the consequences of making invalid assumptions.
The RATS data come from a nutrition study conducted on three groups of rats (Crowder and Hand, 1990). The three groups were put on different diets, and each animal’s body weight (grams) was recorded repeatedly, approximately weekly, over a 9-week period (the weighing day number is given). The most interesting question is whether the growth profiles of the three groups differ.
# Read in the RATS data in wide and long format (L for long)
RATS <- read.table("./data/RATS.txt", header=TRUE)
RATSL <- read.table("./data/RATSL.txt", header=TRUE)
# factor the categorical variables ID and group
RATSL$ID <- as.factor(RATSL$ID)
RATSL$Group <- as.factor(RATSL$Group)
# Look at the WHOLE data in wide format
RATS
## ID Group WD1 WD8 WD15 WD22 WD29 WD36 WD43 WD44 WD50 WD57 WD64
## 1 1 1 240 250 255 260 262 258 266 266 265 272 278
## 2 2 1 225 230 230 232 240 240 243 244 238 247 245
## 3 3 1 245 250 250 255 262 265 267 267 264 268 269
## 4 4 1 260 255 255 265 265 268 270 272 274 273 275
## 5 5 1 255 260 255 270 270 273 274 273 276 278 280
## 6 6 1 260 265 270 275 275 277 278 278 284 279 281
## 7 7 1 275 275 260 270 273 274 276 271 282 281 284
## 8 8 1 245 255 260 268 270 265 265 267 273 274 278
## 9 9 2 410 415 425 428 438 443 442 446 456 468 478
## 10 10 2 405 420 430 440 448 460 458 464 475 484 496
## 11 11 2 445 445 450 452 455 455 451 450 462 466 472
## 12 12 2 555 560 565 580 590 597 595 595 612 618 628
## 13 13 3 470 465 475 485 487 493 493 504 507 518 525
## 14 14 3 535 525 530 533 535 540 525 530 543 544 559
## 15 15 3 520 525 530 540 543 546 538 544 553 555 548
## 16 16 3 510 510 520 515 530 538 535 542 550 553 569
# Look at the first 24 rows of long format
head(RATSL, 24)
## ID Group WD Weight Time
## 1 1 1 WD1 240 1
## 2 2 1 WD1 225 1
## 3 3 1 WD1 245 1
## 4 4 1 WD1 260 1
## 5 5 1 WD1 255 1
## 6 6 1 WD1 260 1
## 7 7 1 WD1 275 1
## 8 8 1 WD1 245 1
## 9 9 2 WD1 410 1
## 10 10 2 WD1 405 1
## 11 11 2 WD1 445 1
## 12 12 2 WD1 555 1
## 13 13 3 WD1 470 1
## 14 14 3 WD1 535 1
## 15 15 3 WD1 520 1
## 16 16 3 WD1 510 1
## 17 1 1 WD8 250 8
## 18 2 1 WD8 230 8
## 19 3 1 WD8 250 8
## 20 4 1 WD8 255 8
## 21 5 1 WD8 260 8
## 22 6 1 WD8 265 8
## 23 7 1 WD8 275 8
## 24 8 1 WD8 255 8
Looking at the datasets above already shows the difference between the wide and the long formats of the data!
Wide format: a subject’s repeated responses will be in a single row and each response is in a separate column
Long format: each row is one time point per subject
The reason for putting the data in one format or the other is usually simply that different analyses require different setups. Beyond the technical requirements, each approach also has analytical implications: the wide format emphasizes the subject, while the long format emphasizes the measurement occasion.
Note! The wide format has 16 observations of 13 variables and the long format 176 observations of 5 variables. The variable Time was added to the long format data to give the weighing day as a number (instead of the string “WD1” etc.) for easier plotting; a sketch of the wide-to-long conversion is given after this note.
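The long data were produced in a separate wrangling script; below is a minimal sketch of how the conversion could be done with tidyr (RATSL_sketch is a hypothetical name used only here):
library(dplyr)
library(tidyr)
# pivot the weighing-day columns into long format and extract the day number
RATSL_sketch <- RATS %>%
  pivot_longer(cols = starts_with("WD"), names_to = "WD", values_to = "Weight") %>%
  mutate(Time = as.integer(substring(WD, 3))) %>%   # "WD1" -> 1, ..., "WD64" -> 64
  arrange(Time, ID)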
According to Diggle et al. (2002), the central idea of the graphical displays of the data is to
Show as much of the relevant raw data as possible rather than only data summaries
Highlight aggregate pattern of potential scientific interest
Identify both cross sectional and longitudinal patterns
Try to make the identification of unusual individuals or unusual observations simple.
Let’s apply the idea to the RATS data.
# Access the package ggplot2
library(ggplot2)
library(tidyr)
library(dplyr)
# Draw the plot
ggplot(RATSL, aes(x = Time, y = Weight, linetype = ID)) +
geom_line() +
scale_linetype_manual(values = rep(1:10, times=4)) +
facet_grid(. ~ Group, labeller = label_both) +
theme(legend.position = "none") +
scale_y_continuous(limits = c(min(RATSL$Weight), max(RATSL$Weight)))+
theme(plot.title = element_text(face="bold",hjust=0.5))+labs(title="Plot of individual rat growth profiles by group")
# Plot the RATSL data
ggplot(RATSL, aes(x = Time, y = Weight, group = ID)) +
geom_line(aes(linetype=Group))+scale_x_continuous(name = "Time (days)", breaks = seq(0, 60, 10))+scale_y_continuous(name = "Weight (grams)")+theme(legend.position = "top",plot.title = element_text(face="bold",hjust=0.5))+labs(title="Plot of individual rat growth profiles in one figure")
Observations from the data:
The weight of almost all the rats is increasing over the nine weeks of the study.
At the beginning of the study, the weights of the rats are roughly ordered as Group 1 << Group 2 < Group 3.
All the subjects inside each group are relatively similar in terms of weight except for one subject in group 2 (possible outlier?).
The increase in weight seems to be greater in Groups 2 and 3 (over 50 g) than in Group 1 (roughly 30 g).
With a large number of observations, graphical displays of individual response profiles are of little use, and investigators then commonly produce graphs showing the average profile for each treatment group along with some indication of the variation of the observations at each time point. Here we provide two alternatives: a continuous mean curve and a discrete side-by-side boxplot of weights at each time point.
# Number of days, baseline (day 1) included
n <- RATSL$Time %>% unique() %>% length()
# Summary data with mean and standard error of RATS by Group and Time
RATSS <- RATSL %>%
group_by(Group, Time) %>%
summarise( mean = mean(Weight), se = sd(Weight)/sqrt(n) ) %>% # note: n is the number of time points here, not the group size, so this se is only indicative
ungroup()
# Glimpse the data
glimpse(RATSS)
## Rows: 33
## Columns: 4
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, …
## $ Time <int> 1, 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 1, 8, 15, 22, 29, 36,…
## $ mean <dbl> 250.625, 255.000, 254.375, 261.875, 264.625, 265.000, 267.375, …
## $ se <dbl> 4.589478, 3.947710, 3.460116, 4.100800, 3.333956, 3.552939, 3.3…
# Plot the mean weight profiles
ggplot(RATSS, aes(x = Time, y = mean, linetype = Group, shape = Group)) +
geom_line() +
scale_linetype_manual(values = c(1,2,3)) +
geom_point(size=3) +
scale_shape_manual(values = c(1,2,3)) +
#geom_errorbar(aes(ymin=mean-se, ymax=mean+se, linetype="1"), width=0.3) +
theme(legend.position = c(0.9,0.5),plot.title = element_text(face="bold",hjust=0.5)) +
scale_y_continuous(name = "mean(Weight) +/- se(Weight)")+
labs(title="Mean weight profiles for the three groups in the RATS data")
# Boxplots
RATSS2 <- RATSS
RATSS2$Time <- as.factor(RATSS2$Time)
ggplot(RATSS2, aes(x = Time, y = mean, fill = Group)) +
geom_boxplot() +
theme(legend.position = c(0.9,0.5),plot.title = element_text(face="bold",hjust=0.5)) +
scale_y_continuous(name = "mean(Weight) +/- se(Weight)")+
labs(title="Boxplots for the RATS data")
The discrete side-by-side boxplot does not give any extra insight into the data (e.g. outliers). The continuous average profile plot is clearer in this case: it shows a general increase in weight over the roughly nine-week study period in all groups, and the group profiles do not overlap, suggesting clear differences between the groups.
The summary measure method (also called the response feature method) transforms the repeated measurements of each individual in the study into a single value that captures some essential feature of the individual’s response over time. Standard univariate methods are then applied to the summary measures created from the sample of subjects.
“The average response to treatment over time is often likely to be the most relevant summary statistic in treatment trials.” – Frison and Pocock (1992)
Applying the wisdom above, let’s use the mean weight on each day as the summary variable within each group.
# Create summary data by Group and Time with the mean as the summary variable (ignoring baseline, Time 1)
RATSL8S <- RATSL %>%
filter(Time > 1) %>%
group_by(Group, Time) %>%
summarise(mean=mean(Weight) ) %>%
ungroup()
# Glimpse the data
glimpse(RATSL8S)
## Rows: 30
## Columns: 3
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, …
## $ Time <int> 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 8, 15, 22, 29, 36, 43, 4…
## $ mean <dbl> 255.000, 254.375, 261.875, 264.625, 265.000, 267.375, 267.250, …
# Draw a boxplot of the mean versus Group
ggplot(RATSL8S, aes(x = Group, y = mean)) +
geom_boxplot() +
stat_summary(fun = "mean", geom = "point", shape=23, size=4, fill = "white") +
scale_y_continuous(name = "mean(Weight), Time 8-60 days")+
theme(plot.title = element_text(face="bold",hjust=0.5)) +
labs(title="Boxplots of mean summary measure for the three groups in the RATS data")
Group 1 is clearly different from the rest (no surprise given the earlier graphical displays). Groups 2 and 3 overlap a bit, though. With three groups, a two-sample t-test is out of the question, so we could try analysis of variance (ANOVA). We are, however, forcing it a bit, since the assumptions for ANOVA are listed below; a quick informal check of the normality assumption follows the list.
The observations are obtained independently (not true!) and randomly from the population defined by the factor levels.
The data of each factor level are normally distributed.
These normal populations have a common variance.
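A sketch of that normality check (not part of the original analysis): Shapiro-Wilk tests of the summary measure within each group.
# Shapiro-Wilk test of normality for the summary measure within each group
RATSL8S %>%
  group_by(Group) %>%
  summarise(shapiro_p = shapiro.test(mean)$p.value)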
Well, let’s try ANOVA anyway.
# Add the baseline from the original data as a new variable to the summary data
RATSL8S2 <- RATSL8S %>%
mutate(baseline = c(rep(RATSS$mean[1],10),rep(RATSS$mean[12],10),rep(RATSS$mean[23],10)))
# Fit the linear model with the mean as the response
fit <- lm(mean ~ baseline + Group, data = RATSL8S2)
# Compute the analysis of variance table for the fitted model with anova()
anova(fit)
## Analysis of Variance Table
##
## Response: mean
## Df Sum Sq Mean Sq F value Pr(>F)
## baseline 1 398745 398745 2173.475 < 2.2e-16 ***
## Group 1 1582 1582 8.622 0.006713 **
## Residuals 27 4953 183
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The results show that the baseline (the weight observed on day one) is a highly significant variable (p < 0.001). The Group variable is also significant (p < 0.01), meaning there are differences between the groups even after adjusting for the baseline weight.
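To see which groups actually differ from each other, one could follow up with pairwise comparisons; a rough sketch (ignoring the baseline covariate, so only indicative):
# pairwise t-tests on the summary measure with Bonferroni correction
pairwise.t.test(RATSL8S2$mean, RATSL8S2$Group, p.adjust.method = "bonferroni")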
The Brief Psychiatric Rating Scale (BPRS) was used to evaluate 40 male subjects suspected of having schizophrenia. Subjects were randomly assigned to one of two treatment groups, and each subject was rated on the BPRS before treatment (week 0) and then at weekly intervals for eight weeks. The BPRS value is the sum of 18 symptom constructs (e.g. hostility, suspiciousness, hallucinations, grandiosity), each of which is rated from one (not present) to seven (extremely severe). The data originate from Davis (2002).
# Read in BPRS data; wide and long versions
BPRS <- read.table("./data/BPRS.txt", header=TRUE)
BPRSL <- read.table("./data/BPRSL.txt", header=TRUE)
# Factor the categorical variables subject and treatment
BPRSL$subject <- as.factor(BPRSL$subject)
BPRSL$treatment <- as.factor(BPRSL$treatment)
# Look at the wide data
glimpse(BPRS)
## Rows: 40
## Columns: 11
## $ treatment <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,…
## $ subject <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, …
## $ week0 <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 66,…
## $ week1 <int> 36, 68, 55, 77, 75, 43, 61, 36, 43, 51, 34, 52, 32, 35, 68,…
## $ week2 <int> 36, 61, 41, 49, 72, 41, 47, 38, 39, 51, 34, 49, 36, 36, 65,…
## $ week3 <int> 43, 55, 38, 54, 65, 38, 30, 38, 35, 55, 41, 54, 31, 34, 49,…
## $ week4 <int> 41, 43, 43, 56, 50, 36, 27, 31, 28, 53, 36, 48, 25, 25, 36,…
## $ week5 <int> 40, 34, 28, 50, 39, 29, 40, 26, 22, 43, 36, 43, 25, 27, 32,…
## $ week6 <int> 38, 28, 29, 47, 32, 33, 30, 26, 20, 43, 38, 37, 21, 25, 27,…
## $ week7 <int> 47, 28, 25, 42, 38, 27, 31, 25, 23, 39, 36, 36, 19, 26, 30,…
## $ week8 <int> 51, 28, 24, 46, 32, 25, 31, 24, 21, 32, 36, 31, 22, 26, 37,…
# Look at the long data
glimpse(BPRSL)
## Rows: 360
## Columns: 5
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,…
## $ subject <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, …
## $ weeks <chr> "week0", "week0", "week0", "week0", "week0", "week0", "week…
## $ bprs <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 66,…
## $ week <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
The main focus is the variable bprs. As with the RATS data, we use the long format of the data for the analysis. When wrangling the wide format into the long format, the variable week was added to give the week as a simple integer value; a sketch of the conversion is given below.
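As with RATS, the actual wrangling was done in a separate script; roughly, the long format could be produced like this (BPRSL_sketch is a hypothetical name used only here):
# pivot the weekly columns into long format and extract the week number
BPRSL_sketch <- BPRS %>%
  pivot_longer(cols = starts_with("week"), names_to = "weeks", values_to = "bprs") %>%
  mutate(week = as.integer(substring(weeks, 5)))    # "week0" -> 0, ..., "week8" -> 8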
Let’s start by getting a rough idea of the data, without worrying too much about its longitudinal nature.
# Draw the plot
ggplot(BPRSL, aes(x = week, y = bprs)) +
geom_point(aes(shape=treatment,color=treatment)) +
theme(legend.position = "top",plot.title = element_text(face="bold", hjust=0.5)) +
scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))+
labs(title="Plots of bprs against week,\n ignoring repeated measure structure but retaining treatment group")
Seems pretty much like a mixed bag! Now, let’s take note of the longitudinal nature by connecting the individual profile points.
# Draw the plot
ggplot(BPRSL, aes(x = week, y = bprs, linetype = subject)) +
geom_line() +
scale_linetype_manual(values = rep(1:10, times=4)) +
theme(legend.position = "none") +
scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))+
theme(plot.title = element_text(face="bold",hjust=0.5)) +
labs(title="Plots of individual bprs profile")
ggplot(BPRSL, aes(x = week, y = bprs, linetype = subject)) +
geom_line() +
scale_linetype_manual(values = rep(1:10, times=4)) +
facet_grid(. ~ treatment, labeller = label_both) +
theme(legend.position = "none") +
scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))+
theme(plot.title = element_text(face="bold",hjust=0.5)) +
labs(title="Plots of individual bprs profile by treatment")
# Try scatterplot matrix, colored by treatment group
pairs(BPRS, col=BPRS$treatment)
After connecting the data points for each subject, the plot makes more sense. However, it is still difficult to get any insights from the data. The scatterplot matrix of the repeated measures of bprs is not a terribly helpful addition either, but it does demonstrate that the repeated measures are certainly not independent of one another. What one can say, nevertheless, is that
the bprs values seem to be generally decreasing over time
there seems to be considerable variation within each individual’s measurements
the bprs values of treatment groups 1 and 2 overlap considerably
Let’s fit a simple linear model to the BPRS data.
# create a regression model BPRS_reg
BPRS_reg <- lm(bprs ~ week + treatment, data=BPRSL)
# print out a summary of the model
summary(BPRS_reg)
##
## Call:
## lm(formula = bprs ~ week + treatment, data = BPRSL)
##
## Residuals:
## Min 1Q Median 3Q Max
## -22.454 -8.965 -3.196 7.002 50.244
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 46.4539 1.3670 33.982 <2e-16 ***
## week -2.2704 0.2524 -8.995 <2e-16 ***
## treatment2 0.5722 1.3034 0.439 0.661
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared: 0.1851, Adjusted R-squared: 0.1806
## F-statistic: 40.55 on 2 and 357 DF, p-value: < 2.2e-16
Judging from the summary, the intercept and week (time) are statistically significant in the model. Interestingly, however, the treatment group is not! In any case, this model forms the “baseline” against which the next models will be compared.
The linear model above assumes independence of the repeated bprs measurements, which is not the case in reality. Thus we next fit a random intercept model, which allows the linear regression fit of each subject to differ in intercept from the other subjects.
# access library lme4
library(lme4)
# Create a random intercept model
BPRS_ref <- lmer(bprs ~ week + treatment + (1 | subject), data = BPRSL, REML = FALSE)
# Print the summary of the model
summary(BPRS_ref)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: bprs ~ week + treatment + (1 | subject)
## Data: BPRSL
##
## AIC BIC logLik deviance df.resid
## 2748.7 2768.1 -1369.4 2738.7 355
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0481 -0.6749 -0.1361 0.4813 3.4855
##
## Random effects:
## Groups Name Variance Std.Dev.
## subject (Intercept) 47.41 6.885
## Residual 104.21 10.208
## Number of obs: 360, groups: subject, 20
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 46.4539 1.9090 24.334
## week -2.2704 0.2084 -10.896
## treatment2 0.5722 1.0761 0.532
##
## Correlation of Fixed Effects:
## (Intr) week
## week -0.437
## treatment2 -0.282 0.000
Allowing the intercept to vary increases the flexibility (and complexity) of the model. One can see that the fixed-effects estimates here are the same as in the linear model fitted earlier, but their standard errors differ. Let’s increase the flexibility even further.
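The estimated subject-specific deviations from the common intercept can be examined directly; a quick sketch using lme4’s accessor functions:
# subject-specific intercept deviations (random effects) and the common fixed effects
head(ranef(BPRS_ref)$subject)
fixef(BPRS_ref)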
In addition to the intercepts, let’s also allow the slopes to differ between subjects.
# create a random intercept and random slope model
BPRS_ref1 <- lmer(bprs ~ week + treatment + (week | subject), data = BPRSL, REML = FALSE)
# print a summary of the model
summary(BPRS_ref1)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject)
## Data: BPRSL
##
## AIC BIC logLik deviance df.resid
## 2745.4 2772.6 -1365.7 2731.4 353
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.8919 -0.6194 -0.0691 0.5531 3.7976
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## subject (Intercept) 64.8222 8.0512
## week 0.9609 0.9802 -0.51
## Residual 97.4305 9.8707
## Number of obs: 360, groups: subject, 20
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 46.4539 2.1052 22.066
## week -2.2704 0.2977 -7.626
## treatment2 0.5722 1.0405 0.550
##
## Correlation of Fixed Effects:
## (Intr) week
## week -0.582
## treatment2 -0.247 0.000
# perform an ANOVA test on the two models
anova(BPRS_ref1, BPRS_ref)
## Data: BPRSL
## Models:
## BPRS_ref: bprs ~ week + treatment + (1 | subject)
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
## npar AIC BIC logLik deviance Chisq Df Pr(>Chisq)
## BPRS_ref 5 2748.7 2768.1 -1369.4 2738.7
## BPRS_ref1 7 2745.4 2772.6 -1365.7 2731.4 7.2721 2 0.02636 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Now we have considerably increased the complexity of the model compared to the baseline linear model (note again the familiar fixed-effects estimates). The ANOVA (likelihood ratio) test can be used to check whether increasing the complexity is actually worth it, i.e. whether the more complex model is significantly better. Adding parameters always improves the fit, but it also increases the complexity and the computation time.
Here the more complex random intercept and slope model gives a significantly better fit (p < 0.05). The chi-squared (Chisq) value is the likelihood ratio test statistic, i.e. the drop in deviance between the models; the larger it is relative to the difference in degrees of freedom, the stronger the evidence for the more complex model. The model can likely still be improved.
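As a sanity check, the test statistic and p-value can be reproduced by hand from the deviances reported above:
# likelihood ratio test by hand: drop in deviance, df = difference in number of parameters
chisq_stat <- deviance(BPRS_ref) - deviance(BPRS_ref1)   # 2738.7 - 2731.4, about 7.27
pchisq(chisq_stat, df = 7 - 5, lower.tail = FALSE)       # about 0.026, matching anova()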
Finally, let’s allow the treatment x week interaction.
# create a random intercept and random slope model with the interaction
BPRS_ref2 <- lmer(bprs ~ week + treatment + week*treatment + (week | subject), data = BPRSL, REML = FALSE)
# print a summary of the model
summary(BPRS_ref2)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: bprs ~ week + treatment + week * treatment + (week | subject)
## Data: BPRSL
##
## AIC BIC logLik deviance df.resid
## 2744.3 2775.4 -1364.1 2728.3 352
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.0512 -0.6271 -0.0768 0.5288 3.9260
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## subject (Intercept) 64.9964 8.0620
## week 0.9687 0.9842 -0.51
## Residual 96.4707 9.8220
## Number of obs: 360, groups: subject, 20
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 47.8856 2.2521 21.262
## week -2.6283 0.3589 -7.323
## treatment2 -2.2911 1.9090 -1.200
## week:treatment2 0.7158 0.4010 1.785
##
## Correlation of Fixed Effects:
## (Intr) week trtmn2
## week -0.650
## treatment2 -0.424 0.469
## wek:trtmnt2 0.356 -0.559 -0.840
# perform an ANOVA test on the two models
anova(BPRS_ref2, BPRS_ref1)
## Data: BPRSL
## Models:
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
## BPRS_ref2: bprs ~ week + treatment + week * treatment + (week | subject)
## npar AIC BIC logLik deviance Chisq Df Pr(>Chisq)
## BPRS_ref1 7 2745.4 2772.6 -1365.7 2731.4
## BPRS_ref2 8 2744.3 2775.4 -1364.1 2728.3 3.1712 1 0.07495 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# draw the plot of BPRSL with the observed bprs values
ggplot(BPRSL, aes(x = week, y = bprs, group = subject)) +
geom_line(aes(linetype=subject)) +
facet_grid(. ~ treatment, labeller = label_both) +
scale_x_continuous(name = "Week") +
scale_y_continuous(name = "bprs") +
theme(legend.position = "none")+
theme(plot.title = element_text(face="bold",hjust=0.5)) +
labs(title="The Original\nPlots of individual bprs profile by treatment")
# Create a vector of the fitted values
Fitted <- fitted(BPRS_ref2)
# Create a new column fitted in BPRSL
BPRSL$fitted <- Fitted
# draw the plot of BPRSL with the Fitted values of bprs
ggplot(BPRSL, aes(x = week, y = fitted, group = subject)) +
geom_line(aes(linetype = subject)) +
facet_grid(. ~ treatment, labeller = label_both) +
scale_x_continuous(name = "week") +
scale_y_continuous(name = "Fitted bprs") +
theme(legend.position = "none")+
theme(plot.title = element_text(face="bold",hjust=0.5)) +
labs(title="The Best\nRandom intercept and random slope model with interaction")
This is it! The best model we have, although the interaction improves the fit only marginally (p < 0.10) compared to the previous model. Out of curiosity, let’s also plot the fitted values of the previous models for comparison. It is very difficult to see the difference between the final model and the random intercept and slope model without interaction, but compared to the plain random intercept model the difference is clear.
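A sketch of that comparison: compute fitted values also from the simpler mixed models and draw the same faceted display for each (fitted_ref and fitted_ref1 are hypothetical column names added here for illustration).
# fitted values from the simpler mixed models
BPRSL$fitted_ref  <- fitted(BPRS_ref)    # random intercept model
BPRSL$fitted_ref1 <- fitted(BPRS_ref1)   # random intercept and slope model, no interaction
# same display as above, drawn for the random intercept model
ggplot(BPRSL, aes(x = week, y = fitted_ref, group = subject)) +
  geom_line(aes(linetype = subject)) +
  facet_grid(. ~ treatment, labeller = label_both) +
  scale_x_continuous(name = "week") +
  scale_y_continuous(name = "Fitted bprs") +
  theme(legend.position = "none") +
  labs(title = "Random intercept model")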